
    Quantum Key Distribution in OpenSSL

    Most current communications and systems rely on asymmetric cryptography to share a unique secret key between the two communicating parties, which is then used to encrypt the information they exchange. Many researchers estimate that quantum computing will become a threat within 15-20 years. At the moment no quantum computer is able to crack classical cryptography; however, a solution should be found well before classical cryptography reaches its expiration date and communications and systems become vulnerable. Quantum technology poses the problem, but from another perspective it also provides the solution: quantum cryptography is strong enough to protect against both quantum and classical attacks, and it is considered secure because its guarantees rest on the laws of quantum physics. The benefits of quantum cryptography, combined with those of symmetric cryptography, offer an alternative solution to the key-exchange problem: Quantum Key Distribution (QKD), a protocol that describes a cryptographic technique for exchanging a secret key between two end users/applications in a communication. This thesis starts by presenting the quantum threat and the reasons that make quantum computing risky for classical communications and systems. It then argues for the importance of investing resources in this field of research so that a solution is ready before the threat becomes a real risk. Finally, I describe my contribution to Cefriel's activities in the context of Quantum Key Distribution. The internship activity described is a demonstrative approach to integrating QKD technology into the OpenSSL library; the project aims to demonstrate the effectiveness and feasibility of using QKD technology in SSL communications.
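    The abstract does not spell out the integration mechanism, so the following is only a minimal sketch of one plausible approach: feeding a QKD-derived secret into an OpenSSL connection through the library's pre-shared key (PSK) callback, with the key itself obtained out of band from a QKD key-management service. The helper qkd_fetch_key() is hypothetical (a stand-in for such a service), and this is not necessarily how the project described above implemented it.

        /* Sketch: install a QKD-derived pre-shared key in an OpenSSL client.
         * qkd_fetch_key() is a hypothetical stand-in for a call to a QKD
         * key-management service; it is NOT part of OpenSSL. */
        #include <openssl/ssl.h>
        #include <string.h>

        /* Hypothetical helper: obtain a fresh QKD key and its identifier. */
        extern int qkd_fetch_key(unsigned char *key, size_t max_len,
                                 size_t *key_len, char *key_id, size_t id_len);

        static unsigned int psk_client_cb(SSL *ssl, const char *hint,
                                          char *identity,
                                          unsigned int max_identity_len,
                                          unsigned char *psk,
                                          unsigned int max_psk_len)
        {
            unsigned char key[64];
            size_t key_len = 0;
            char key_id[128];

            (void)ssl; (void)hint;
            if (!qkd_fetch_key(key, sizeof(key), &key_len,
                               key_id, sizeof(key_id)))
                return 0;               /* no key available: abort handshake */
            if (key_len > max_psk_len || strlen(key_id) >= max_identity_len)
                return 0;

            strcpy(identity, key_id);   /* tell the server which QKD key to use */
            memcpy(psk, key, key_len);
            return (unsigned int)key_len;   /* length of the installed PSK */
        }

        SSL_CTX *make_qkd_client_ctx(void)
        {
            SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
            if (ctx != NULL)
                SSL_CTX_set_psk_client_callback(ctx, psk_client_cb);
            return ctx;
        }

    A matching server-side callback (set with SSL_CTX_set_psk_server_callback) would look up the same key by its identity, so both endpoints encrypt with the QKD-distributed secret instead of deriving one from a public-key exchange.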

    On the acceptance by code reviewers of candidate security patches suggested by Automated Program Repair tools

    Background: Testing and validation of the semantic correctness of patches produced by Automated Program Repair (APR) tools has received a lot of attention. Yet the eventual acceptance or rejection of suggested patches for real-world projects by human patch reviewers has received limited attention. Objective: To address this issue, we plan to investigate whether (possibly incorrect) security patches suggested by APR tools are recognized by human reviewers. We also want to investigate whether knowing that a patch was produced by an allegedly specialized tool changes the decision of human reviewers. Method: In the first phase, using a balanced design, we present human reviewers with a combination of patches proposed by APR tools for different vulnerabilities and ask the reviewers to adopt or reject the proposed patches. In the second phase, we tell participants that some of the proposed patches were generated by security-specialized tools (even though the tool was actually a 'normal' APR tool) and measure whether the human reviewers change their decision to adopt or reject a patch. Limitations: The experiment will be conducted in an academic setting and, to maintain statistical power, it will focus on a limited sample of popular APR tools and popular vulnerability types.
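    The report above does not include its assignment procedure; purely as an illustration of what a balanced design can look like in this setting, the sketch below rotates which APR tool's patch each reviewer sees for each vulnerability type (a Latin-square style rotation), so tools and vulnerability types are spread evenly across reviewers. All names and sizes are hypothetical.

        /* Illustrative only: a simple balanced (Latin-square style) assignment
         * of candidate patches to reviewers. Each reviewer sees each
         * vulnerability type once; the originating tool is rotated. */
        #include <stdio.h>

        #define N_REVIEWERS 6
        #define N_VULNS     3   /* vulnerability types under study */
        #define N_TOOLS     3   /* APR tools producing the patches */

        int main(void)
        {
            for (int r = 0; r < N_REVIEWERS; r++) {
                printf("reviewer %d:", r + 1);
                for (int v = 0; v < N_VULNS; v++) {
                    int tool = (r + v) % N_TOOLS;  /* rotate tool per reviewer */
                    printf("  vuln%d/tool%d", v + 1, tool + 1);
                }
                printf("\n");
            }
            return 0;
        }

    With six reviewers, three vulnerability types and three tools, this rotation gives every tool-vulnerability combination exactly two reviewers.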

    Assessment of Automated (Intelligent) Toolchains

    [Background:] Automated intelligent toolchains, i.e. compositions of different tools that use AI or static analysis, are widely used in software engineering to deploy automated program repair techniques, and in software security to identify vulnerabilities. [Overall Research Problem:] Most studies of automated intelligent toolchains report uncertainty and evaluation results only for the individual components of the chain. How can we calculate the uncertainty and error propagation of the overall automated toolchain? [Approach:] I plan to replicate published case studies to collect data and to design a methodology that reconstructs the overall correctness metrics of the toolchains, or identifies the missing variables. Further confirmatory experiments with humans will be performed. Finally, I will implement an artifact to automate the overall assessment of automated toolchains. [Current Status:] A preliminary validation of published studies showed promising results.
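    The central question above, how per-component error rates combine over a whole chain, can be made concrete with a small numerical sketch. Assuming a two-stage toolchain (a vulnerability detector feeding an automated repair tool) and assuming, purely for illustration, that the stages err independently, end-to-end recall is roughly the product of the stage recalls, while end-to-end precision depends on how often the second stage produces a plausible patch for a false alarm. The rates below are made-up example values, not data from this research.

        /* Illustrative sketch: composing correctness metrics of a two-stage
         * toolchain (detector -> repair tool) under an independence assumption.
         * All rates are invented example values. */
        #include <stdio.h>

        int main(void)
        {
            /* Stage 1: vulnerability detector */
            double det_recall    = 0.80; /* share of real vulns it flags     */
            double det_precision = 0.70; /* share of its flags that are real */

            /* Stage 2: APR tool, applied to whatever the detector passes on */
            double fix_rate_true  = 0.60; /* correct patch, given a real vuln */
            double fix_rate_false = 0.10; /* plausible patch, given a false alarm */

            /* End-to-end recall: a real vuln must be flagged AND fixed. */
            double chain_recall = det_recall * fix_rate_true;

            /* End-to-end precision: of all emitted patches, how many fix a
             * real vulnerability? Derived from the detector's output mix. */
            double true_in  = det_precision;       /* real vulns per flag   */
            double false_in = 1.0 - det_precision; /* false alarms per flag */
            double good_out = true_in  * fix_rate_true;
            double bad_out  = false_in * fix_rate_false;
            double chain_precision = good_out / (good_out + bad_out);

            printf("end-to-end recall    = %.2f\n", chain_recall);
            printf("end-to-end precision = %.2f\n", chain_precision);
            return 0;
        }

    In practice the independence assumption rarely holds, which is exactly why reconstructing the overall metrics, or identifying the missing variables, requires the replication and methodology work described above.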